docs: clarify OpenAI Python parse vs response_format guidance#2884

Merged
jannikmaierhoefer merged 1 commit into main from claude/quirky-montalcini-52d792
May 4, 2026

Conversation


@jannikmaierhoefer jannikmaierhoefer commented Apr 30, 2026

Summary

  • Update the OpenAI Python integration page and the structured output cookbook to recommend client.chat.completions.parse(...) for openai>=1.92.0 and scope the beta caveat to older SDK versions.
  • Note that the Langfuse Python SDK instruments both the stable (openai.resources.chat.completions.Completions.parse) and the legacy beta path, so Langfuse attributes (name, metadata, langfuse_session_id, …) work on either.
  • Keep the response_format + type_to_response_format_param example as a fallback for users who cannot upgrade openai.
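The recommended call shape from the first bullet can be sketched without hitting the network; the model name, prompt, session id, and the `CalendarEvent` placeholder below are all hypothetical, and `langfuse_session_id` is assumed to travel as a metadata key (per the review note below, not a direct kwarg):

```python
class CalendarEvent:
    """Hypothetical placeholder standing in for a Pydantic BaseModel subclass."""

# Kwargs for the recommended stable call (openai>=1.92.0) through the
# Langfuse drop-in client. All values here are illustrative placeholders.
parse_kwargs = {
    "model": "gpt-4o-2024-08-06",
    "messages": [{"role": "user", "content": "Alice and Bob meet on Friday."}],
    "response_format": CalendarEvent,  # a Pydantic model class in real use
    # Langfuse attributes instrumented on both the stable and beta parse paths:
    "name": "extract-calendar-event",
    "metadata": {"langfuse_session_id": "demo-session"},
}
```

With the Langfuse drop-in (`from langfuse.openai import OpenAI`) the call is then `client.chat.completions.parse(**parse_kwargs)`.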

Why

Reported by David Traina (Ramp) in Pylon #1339. OpenAI moved parse/stream out of beta in openai-python v1.92.0 ~10 months ago, but our docs still warned against the beta API and pushed users to response_format + chat.completions.create. The SDK has already supported the stable path for a while — only the docs were stale.

Test plan

  • pnpm dev and verify the Structured Output section on /integrations/model-providers/openai-py renders correctly.
  • Verify the regenerated /guides/cookbook/integration_openai_structured_output page renders the updated note and parse example.

🤖 Generated with Claude Code

Disclaimer: Experimental PR review

Greptile Summary

This PR corrects stale documentation that incorrectly told users to avoid client.chat.completions.parse in favour of response_format + create. It updates both the integration page and the cookbook to recommend the stable parse API (available since openai-python v1.92.0) and preserves a type_to_response_format_param fallback for users who cannot upgrade.

Confidence Score: 4/5

Safe to merge — documentation-only changes with accurate technical content and only minor style observations.

All three files are docs/notebook updates with no runtime code, and the guidance is factually correct. The only findings are P2-level: a private-API import risk in the legacy fallback (a pre-existing pattern, not introduced here) and mildly ambiguous phrasing in one note.

No files require special attention; the private import in openai-py.mdx is worth a comment but is not blocking.

Important Files Changed

  • content/integrations/model-providers/openai-py.mdx — Structured Output section rewritten to recommend the stable parse API (openai>=1.92.0) and retain type_to_response_format_param as a legacy fallback; the fallback imports from a private internal module (openai.lib._parsing._completions).
  • content/guides/cookbook/integration_openai_structured_output.md — Note updated to clarify that both parse paths are instrumented; the Alternative section switched from client.beta.chat.completions.parse to the stable client.chat.completions.parse with a Langfuse name attribute; phrasing in the note is slightly ambiguous.
  • cookbook/integration_openai_structured_output.ipynb — Notebook reformatted to 1-space indentation and updated to mirror the .md changes: stable parse path, name attribute added, old output cells preserved.

Flowchart

%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A[User wants Structured Output\nwith Langfuse tracing] --> B{openai SDK version?}
    B -- ">=1.92.0\n(recommended)" --> C["client.chat.completions.parse(...)\nresponse_format=PydanticModel\nname='...' metadata={...}"]
    B -- "<1.92.0\n(legacy)" --> D{Pydantic model needed?}
    D -- Yes --> E["client.beta.chat.completions.parse(...)\nresponse_format=PydanticModel\n(re-routed to stable on >=1.92.0)"]
    D -- No / can't upgrade --> F["type_to_response_format_param(Model)\n→ client.chat.completions.create(...)\nresponse_format=schema_dict"]
    C --> G[Langfuse traces both name\nand metadata attributes ✓]
    E --> G
    F --> G
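The version branch at the top of the flowchart amounts to a simple cutoff check; a stdlib-only sketch (the 1.92.0 cutoff comes from this PR, the helper name is hypothetical):

```python
def stable_parse_available(openai_version: str) -> bool:
    """True when the installed openai package is >=1.92.0, i.e. when
    client.chat.completions.parse is the stable, non-beta entry point."""
    major, minor = (int(p) for p in openai_version.split(".")[:2])
    return (major, minor) >= (1, 92)

for v in ("1.91.0", "1.92.0", "2.3.0"):
    path = "chat.completions.parse" if stable_parse_available(v) else "legacy fallback"
    print(v, "->", path)
```

In real code you would read the installed version via `importlib.metadata.version("openai")` rather than hard-coding it.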
Prompt To Fix All With AI
Fix the following 2 code review issues. Work through them one at a time, proposing concise fixes.

---

### Issue 1 of 2
content/integrations/model-providers/openai-py.mdx:101
**Private internal API in legacy fallback**

`openai.lib._parsing._completions` is an underscore-prefixed internal module — it is not part of OpenAI's public API surface and can be removed or renamed without a semver-breaking release. Users who follow this fallback path are silently depending on an implementation detail that could break on any minor OpenAI SDK bump, even within `<1.92.0`. Consider noting this risk explicitly, or suggesting users pin their OpenAI version when using this path.
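One way to contain this risk is a guarded import that degrades to building the dict by hand. The manual builder below is a hypothetical stand-in, and the exact shape of the strict json_schema payload is assumed from OpenAI's structured-output format, not copied from the SDK:

```python
try:
    # Private, underscore-prefixed module: OpenAI may move or remove it in
    # any release, even within <1.92.0 — pin your openai version if you
    # depend on this path.
    from openai.lib._parsing._completions import type_to_response_format_param
except ImportError:
    type_to_response_format_param = None  # fall back to the manual builder

def manual_response_format(name: str, schema: dict) -> dict:
    """Hand-rolled json_schema response_format dict (shape assumed)."""
    return {
        "type": "json_schema",
        "json_schema": {"name": name, "schema": schema, "strict": True},
    }
```

Either result is passed as `response_format=` to `client.chat.completions.create(...)` on the legacy path.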

### Issue 2 of 2
content/guides/cookbook/integration_openai_structured_output.md:58
**Slightly ambiguous parenthetical in the note**

The clause "for older SDK versions, where `parse` is re-routed to the stable method on newer SDKs" embeds a forward-reference to newer-SDK behaviour inside the description of the older-SDK path, which can read as contradictory. Consider splitting the two facts into separate sentences, e.g. "…the legacy `client.beta.chat.completions.parse(...)` (available on `openai<1.92.0`). On `openai>=1.92.0` the OpenAI SDK re-routes beta calls to the stable method, so either path reaches the same instrumented function."
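The re-routing claim in the suggested wording can be illustrated with a toy client in which the beta namespace simply aliases the stable one — hypothetical classes, not the real openai SDK internals:

```python
class Completions:
    def parse(self, **kwargs):
        return ("stable-parse", kwargs)

class Chat:
    def __init__(self):
        self.completions = Completions()

class Beta:
    def __init__(self, chat):
        self.chat = chat  # beta namespace re-routed to the same chat object

class Client:
    def __init__(self):
        self.chat = Chat()
        self.beta = Beta(self.chat)

client = Client()
# Both attribute paths resolve to one and the same completions object,
# so instrumenting that object covers either call style.
assert client.beta.chat.completions is client.chat.completions
```

This mirrors the point of the note: on `openai>=1.92.0`, either path reaches the same instrumented function.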


openai-python>=1.92.0 graduated parse/stream out of beta. The Langfuse
SDK already instruments both client.chat.completions.parse (stable) and
client.beta.chat.completions.parse (legacy), so the previous "use
response_format with chat.completions.create instead of the Beta API"
guidance was outdated. Update the OpenAI Python integration page and
the structured output cookbook to recommend the stable parse helper and
scope the beta caveat to older SDKs.

Reported by David Traina (Ramp) via support.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Langfuse-Session: https://cloud.langfuse.com/project/cloramnkj0002jz088vzn1ja4/sessions/b55454d8-1e5e-477d-a6ba-f81f65fe959c

vercel Bot commented Apr 30, 2026

The latest updates on your projects. Learn more about Vercel for GitHub.

Project: langfuse-docs · Deployment: Ready · Actions: Preview, Comment · Updated (UTC): Apr 30, 2026 6:55am


@dosubot dosubot Bot added the size:L This PR changes 100-499 lines, ignoring generated files. label Apr 30, 2026
@review-notebook-app

Check out this pull request on  ReviewNB

See visual diffs & provide feedback on Jupyter Notebooks.


Powered by ReviewNB

@github-actions

@claude review

@dosubot dosubot Bot added the documentation Improvements or additions to documentation label Apr 30, 2026

@claude claude Bot left a comment


Docs-only clarification of the OpenAI Python parse vs response_format guidance — the inline nits are minor wording suggestions and the underlying changes are correct.


Overview

This PR updates the OpenAI Python integration docs (content/integrations/model-providers/openai-py.mdx) and the matching structured-output cookbook (.md and .ipynb) to recommend client.chat.completions.parse(...) for openai>=1.92.0, scope the beta caveat to older SDK versions, and keep the response_format + type_to_response_format_param example as a fallback. It also adds a new parse example with name/metadata Langfuse attributes.

Security risks

None. This change touches only Markdown/MDX/notebook content with no runtime, auth, or user-input handling implications.

Level of scrutiny

Low — documentation-only, no code paths or configuration are affected. The factual claim being introduced (parse graduated out of beta in openai-python v1.92.0 and the Langfuse SDK instruments both the stable and beta parse paths) is consistent with the linked release notes and the rest of the integration docs.

Other factors

The two inline nits posted are wording-level: (1) the new bullet groups langfuse_session_id alongside direct kwargs even though it is a metadata key, and (2) the #### Structured Output subsection now lives under an ### OpenAI Beta APIs parent whose intro still says beta APIs require manual @observe() wrapping. Neither is incorrect documentation per se — the canonical 'Custom trace properties' table and a correct metadata={...} example are right above and below the new prose — and a Vercel preview is already building for visual verification. These are the kind of small editorial tweaks a maintainer can take or leave; they don't gate approval.

Comment thread content/integrations/model-providers/openai-py.mdx
@jannikmaierhoefer jannikmaierhoefer added this pull request to the merge queue May 4, 2026
@dosubot dosubot Bot added the auto-merge This PR is set to be merged label May 4, 2026
Merged via the queue into main with commit 6735064 May 4, 2026
15 checks passed
@jannikmaierhoefer jannikmaierhoefer deleted the claude/quirky-montalcini-52d792 branch May 4, 2026 12:33
@dosubot dosubot Bot removed the auto-merge This PR is set to be merged label May 4, 2026